10 research outputs found

    Software Defined Radio Implementation of Carrier and Timing Synchronization for Distributed Arrays

    Full text link
    The communication range of wireless networks can be greatly improved by using distributed beamforming from a set of independent radio nodes. One of the key challenges in establishing a beamformed communication link from separate radios is achieving carrier frequency and sample timing synchronization. This paper describes an implementation that addresses both carrier frequency and sample timing synchronization simultaneously using RF signaling between designated master and slave nodes. By using a pilot signal transmitted by the master node, each slave estimates and tracks the frequency and timing offsets and digitally compensates for them. A real-time implementation of the proposed system was developed in GNU Radio and tested with Ettus USRP N210 software defined radios. The measurements show that the distributed array can reach a residual frequency error of 5 Hz and a residual timing offset of 1/16 the sample duration for 70 percent of the time. This performance enables distributed beamforming for range extension applications. Comment: Submitted to 2019 IEEE Aerospace Conference

    Routing in Mobile Ad-Hoc Networks using Social Tie Strengths and Mobility Plans

    Full text link
    We consider the problem of routing in a mobile ad-hoc network (MANET) for which the planned mobilities of the nodes are partially known a priori and the nodes travel in groups. This situation arises commonly in military and emergency response scenarios. Optimal routes are computed using the most reliable path principle, in which the negative logarithm of a node pair's adjacency probability is used as a link weight metric. This probability is estimated using the mobility plan as well as dynamic information captured by table exchanges, including a measure of the social tie strength between nodes. The latter information is useful when nodes deviate from their plans or when the plans are inaccurate. We compare the proposed routing algorithm with the commonly-used optimized link state routing (OLSR) protocol in ns-3 simulations. As the OLSR protocol does not exploit the mobility plans, it relies on link state determination, which suffers with increasing mobility. Our simulations show considerably better throughput performance with the proposed approach as compared with OLSR, at the expense of increased overhead. However, in the high-throughput regime, the proposed approach outperforms OLSR in terms of both throughput and overhead.
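The most-reliable-path rule can be sketched as a standard Dijkstra search over link weights w(u, v) = -log p(u, v): minimizing the summed weights maximizes the product of link adjacency probabilities. The graph and probabilities below are hypothetical; the actual protocol additionally folds mobility plans and social-tie strengths into the probability estimates.

```python
import heapq
from math import exp, log

def most_reliable_path(adj_prob, src, dst):
    """Dijkstra over link weights -log p(u, v); minimizing the sum of
    negative log-probabilities maximizes the path's end-to-end reliability."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, p in adj_prob.get(u, {}).items():
            nd = d - log(p)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                      # walk predecessors back to src
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], exp(-dist[dst])

# Two 0.9-reliable hops (product 0.81) beat one direct 0.5-reliable link.
links = {"A": {"B": 0.9, "C": 0.5}, "B": {"C": 0.9}}
path, reliability = most_reliable_path(links, "A", "C")
```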

    Quantization Strategies For Low-Power Communications

    No full text
    Power reduction in digital communication systems can be achieved in many ways. Reduction of the wordlengths used to represent data and control variables in the digital circuits comprising a communication system is an effective strategy, as register power consumption increases with wordlength. Another strategy is the reduction of the required data transmission rate, and hence speed of the digital circuits, by efficient source encoding. In this dissertation, applications of both of these power reduction strategies are investigated. The LMS adaptive filter, for which a myriad of applications exists in digital communication systems, is optimized for performance with a power consumption constraint. This optimization is achieved by an analysis of the effects of wordlength reduction on performance, both transient and steady-state, as well as on power consumption. Analytical formulas for the residual steady-state mean square error (MSE) due to quantization versus wordlength of data and coefficient registers are used to determine the optimal allocation of bits to data versus coefficients under a power constraint. A condition on the wordlengths is derived under which the potentially hazardous transient "slowdown" phenomenon is avoided. The algorithm is then optimized for no slowdown and minimum MSE. Numerical studies are presented for the case of LMS channel equalization. Next, source encoding by vector quantization is studied for distributed hypothesis testing environments with simple binary hypotheses. It is shown that, in some cases, low-rate quantizers exist that cause no degradation in hypothesis testing performance. These cases are, however, uncommon. For the majority of cases, in which quantiza..
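The finite-wordlength LMS behavior studied here can be sketched by re-quantizing the coefficient register after every update. This is a minimal illustration rather than the dissertation's analytical model; the channel taps, step size, and wordlengths are made up.

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Round to a signed fixed-point grid with the given wordlength."""
    step = full_scale / 2 ** (bits - 1)
    return np.clip(np.round(x / step) * step, -full_scale, full_scale - step)

def lms_identify(x, d, n_taps, mu, coeff_bits):
    """LMS adaptation with the coefficient register re-quantized after
    every update, mimicking a finite-wordlength implementation."""
    w = np.zeros(n_taps)
    err = np.zeros(len(d))
    for k in range(n_taps - 1, len(d)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        err[k] = d[k] - w @ u
        w = quantize(w + mu * err[k] * u, coeff_bits)
    return w, err

# Identify a short FIR channel from input/output observations.
rng = np.random.default_rng(1)
x = 0.3 * rng.standard_normal(2000)
h = np.array([0.5, 0.25, 0.1])
d = np.convolve(x, h)[:len(x)]
w, err = lms_identify(x, d, n_taps=3, mu=0.1, coeff_bits=16)
```

Dropping `coeff_bits` to around 6 in this setup reproduces the "slowdown" effect: once an update `mu * err * u` falls below half an LSB, it rounds to zero and adaptation stalls at an elevated MSE.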

    High rate vector quantization for detection

    No full text
    We investigate high rate quantization for various detection and reconstruction loss criteria. A new distortion measure is introduced which accounts for global loss in best attainable binary hypothesis testing performance. The distortion criterion is related to the area under the receiver-operating-characteristic (ROC) curve. Specifically, motivated by Sanov’s theorem, we define a performance curve as the trajectory of the pair of optimal asymptotic Type I and Type II error rates of the most powerful Neyman-Pearson test of the hypotheses. The distortion measure is then defined as the difference between the area-under-the-curve (AUC) of the optimal pre-encoded hypothesis test and the AUC of the optimal post-encoded hypothesis test. As compared to many previously introduced distortion measures for decision making, this distortion measure has the advantage of being independent of any detection thresholds or priors on the hypotheses, which are generally difficult to specify in the code design process. A high resolution Zador-Gersho type of analysis is applied to characterize the point density and the inertial profile associated with the optimal high rate vector quantizer. The analysis applies to a restricted class of high-rate quantizers that have bounded cells with vanishing volumes. The optimal point density is used to specify a Lloyd-type algorithm which allocates its finest resolution to regions where the gradient of the pre-encode
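The Lloyd-type algorithm mentioned above builds on the classical generalized Lloyd iteration. The sketch below shows the standard MSE version on a scalar Gaussian source, alternating a nearest-codeword partition step with a centroid (conditional-mean) step; the paper's algorithm instead shapes the codebook by the AUC-based distortion and its optimal point density. All parameters here are illustrative.

```python
import numpy as np

def lloyd(samples, n_levels, iters=50):
    """Generalized Lloyd iteration for a scalar MSE quantizer: alternate
    nearest-codeword partitioning with centroid (conditional-mean) updates."""
    rng = np.random.default_rng(0)
    codebook = np.sort(rng.choice(samples, n_levels, replace=False))
    for _ in range(iters):
        # Partition step: assign each sample to its nearest codeword.
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # Centroid step: move each codeword to the mean of its cell.
        for j in range(n_levels):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
        codebook = np.sort(codebook)
    return codebook

samples = np.random.default_rng(2).standard_normal(4000)
codebook = lloyd(samples, 4)
idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
mse = np.mean((samples - codebook[idx]) ** 2)
```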

    Power vs. Performance Tradeoffs for Reduced Resolution LMS Adaptive Filters

    No full text
    Low power implementation of digital adaptive filters for channel equalization and interference cancelling is an important aspect of wireless communications transceiver design. In this paper we provide a low power design methodology for the widely used finite precision LMS adaptive channel equalizer. We first derive expressions for the increase in mean square error of the digital adaptive filter when the total power dissipated is decreased by reducing both the number of data bits and filter coefficient bits. We then derive a constraint on the data and coefficient wordlengths to avoid the so-called stopping, or slowdown, phenomenon. We also obtain expressions for the power-optimal bit-allocation factor which determines the optimal proportion of bits to allocate to the data, the remainder being allocated to the coefficients. Numerical studies are presented for an exponential memory ISI channel with Gaussian training sequence. Keywords: Roundoff and Quantization Analysis, Low-Power Channel..
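The flavor of the wordlength/MSE analysis can be checked numerically with the standard additive-quantization-noise model: b-bit rounding with step Δ adds noise of variance Δ²/12, so the best filter driven by quantized data has a residual MSE near ||h||²·Δ²/12. The channel and wordlengths below are made up, and this check does not reproduce the paper's bit-allocation formulas.

```python
import numpy as np

def q(v, bits):
    """Signed fixed-point rounding on [-1, 1) with the given wordlength."""
    step = 1.0 / 2 ** (bits - 1)
    return np.clip(np.round(v / step) * step, -1.0, 1.0 - step)

rng = np.random.default_rng(0)
n = 20000
x = 0.2 * rng.standard_normal(n)            # input, well inside full scale
h = np.array([0.5, 0.25, 0.1])              # illustrative 3-tap channel
d = np.convolve(x, h)[:n]                   # desired (unquantized) output

# Residual MSE of the best 3-tap filter driven by b-bit data, compared with
# the additive-quantization-noise prediction  ||h||^2 * step^2 / 12.
results = {}
for bits in (6, 8, 10):
    xq = q(x, bits)
    U = np.column_stack([np.roll(xq, i) for i in range(3)])
    w, *_ = np.linalg.lstsq(U[3:], d[3:], rcond=None)   # Wiener/LS fit
    mse = np.mean((d[3:] - U[3:] @ w) ** 2)
    step = 1.0 / 2 ** (bits - 1)
    results[bits] = (mse, np.sum(h ** 2) * step ** 2 / 12)
```

Each extra data bit halves Δ and so cuts this MSE floor by a factor of four, which is the kind of exchange rate a power-constrained bit-allocation trades against register power.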

    Performance and Complexity Analysis of VLSI Multi-Carrier Receivers for Low-Energy Wireless Communications

    No full text
    A high data rate multi-carrier receiver employing orthogonal frequency division multiplexing (OFDM) for mobile communications requires joint low-power VLSI design optimization of the channel equalizer and demodulator. Although the multi-carrier receiver structure can be greatly simplified by use of the Fast Fourier Transform (FFT) for demodulation, the physical hardware complexity of the FFT is still significant. By introducing a powerful channel equalizer, reduction in FFT complexity can be achieved while maintaining sufficient reception quality. This paper analyzes the performance and hardware complexity of the receiver structure in a multipath communications environment.

    I. INTRODUCTION High data rate communication in highly-mobile environments is increasingly becoming desirable [1]. For single carrier systems, this would either entail using a very complex constellation with many bits per symbol or a very high symbol rate [2]. Using a dense signaling constellation is undesirable f..
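The FFT-based receiver structure can be sketched end to end: IFFT modulation, a cyclic prefix longer than the channel's delay spread, and FFT demodulation with a one-tap equalizer per subcarrier. The parameters (64 subcarriers, a two-tap channel, no noise) are illustrative, not taken from the paper.

```python
import numpy as np

n_sc = 64            # number of subcarriers
cp = 16              # cyclic prefix length (exceeds channel memory)
rng = np.random.default_rng(0)

# QPSK symbol on each subcarrier.
b = rng.integers(0, 2, (n_sc, 2))
sym = ((2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT, then prepend the cyclic prefix.
t = np.fft.ifft(sym) * np.sqrt(n_sc)
tx = np.concatenate([t[-cp:], t])

# Two-tap multipath channel, shorter than the cyclic prefix.
h = np.array([1.0, 0.4 + 0.3j])
rx = np.convolve(tx, h)[:len(tx)]

# Demodulation: drop the prefix, FFT, one-tap equalization per subcarrier.
H = np.fft.fft(h, n_sc)                    # channel frequency response
y = np.fft.fft(rx[cp:]) / np.sqrt(n_sc)
sym_hat = y / H
```

Because the prefix turns the linear channel into a circular one, each subcarrier sees a flat gain H[k], and equalization collapses to one complex division per bin; the paper's tradeoff concerns how much of this FFT cost a stronger time-domain equalizer can buy back.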